
    Towards learning inverse kinematics with a neural network based tracking controller

    Learning an inverse kinematic model of a robot is a well-studied subject. However, achieving this without information about the geometric characteristics of the robot is less investigated. In this work, a novel control approach is presented based on a recurrent neural network. Without any prior knowledge about the robot, this control strategy learns to control the arm of the iCub robot online by solving the inverse kinematic problem in its control region. Because of its exploration strategy, the robot starts to learn by generating and observing random motor behavior. The modulation and generalization capabilities of this approach are investigated as well.
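    To make the learning scheme concrete, below is a minimal sketch of the motor-babbling idea, assuming a planar two-link arm and a plain LMS-style linear inverse model in place of the paper's recurrent network; the function names and constants are illustrative, not taken from the paper.

```python
import numpy as np

def forward_kinematics(q, l1=1.0, l2=1.0):
    """End-effector position of a planar two-link arm (a stand-in for the iCub arm)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

rng = np.random.default_rng(0)
q0 = np.array([0.5, 0.5])   # home posture around which the robot babbles
W = np.zeros((2, 2))        # linear inverse model: dq ~= W @ dx
eta = 0.5                   # LMS learning rate

# Motor babbling: issue random joint commands, observe the resulting hand
# displacement, and regress the command back onto the observed displacement.
for _ in range(5000):
    dq = rng.normal(scale=0.05, size=2)
    dx = forward_kinematics(q0 + dq) - forward_kinematics(q0)
    err = dq - W @ dx               # inverse-model prediction error
    W += eta * np.outer(err, dx)    # online (LMS) update

# Use the learned inverse model as a tracking controller near the home posture.
q = q0.copy()
target = forward_kinematics(q0) + np.array([0.10, -0.10])
for _ in range(50):
    q = q + 0.5 * W @ (target - forward_kinematics(q))
print(forward_kinematics(q), target)   # the two should nearly coincide
```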

    Towards a neural hierarchy of time scales for motor control

    Animals show remarkably rich motion skills that are still far from realizable with robots. Inspired by the neural circuits that generate rhythmic motion patterns in the spinal cord of all vertebrates, one main research direction points towards the use of central pattern generators in robots. One of the key advantages of this is that the dimensionality of the control problem is reduced. In this work we investigate this further by introducing a multi-timescale control hierarchy with a hierarchy of recurrent neural networks at its core. By means of robot experiments, we demonstrate that this hierarchy can embed any rhythmic motor signal by imitation learning. Furthermore, the proposed hierarchy allows the tracking of several high-level motion properties (e.g., amplitude and offset), which are usually observed at a slower rate than the generated motion. Although these experiments are preliminary, the results are promising and have the potential to open the door to rich motor skills and advanced control.
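    As a rough illustration of the multi-timescale idea, here is a minimal sketch of two coupled leaky-integrator recurrent networks with different time constants, where the slow network modulates the fast one; the sizes, time constants, and coupling are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fast, n_slow = 100, 50
tau_fast, tau_slow = 0.01, 0.1   # seconds: the slow layer integrates 10x slower
dt = 0.001

# Random recurrent weights, scaled so each network has rich internal dynamics.
W_fast = 1.5 * rng.normal(size=(n_fast, n_fast)) / np.sqrt(n_fast)
W_slow = 1.5 * rng.normal(size=(n_slow, n_slow)) / np.sqrt(n_slow)
W_down = rng.normal(size=(n_fast, n_slow)) / np.sqrt(n_slow)  # slow -> fast coupling

x_fast = rng.normal(size=n_fast)
x_slow = rng.normal(size=n_slow)
motor = []

for _ in range(3000):
    r_fast, r_slow = np.tanh(x_fast), np.tanh(x_slow)
    # The slow layer drifts on its own timescale (cf. slowly varying
    # properties such as amplitude and offset)...
    x_slow += dt / tau_slow * (-x_slow + W_slow @ r_slow)
    # ...and modulates the fast layer that produces the rhythmic motor signal.
    x_fast += dt / tau_fast * (-x_fast + W_fast @ r_fast + W_down @ r_slow)
    motor.append(r_fast[0])   # one unit read out as the motor command
```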

    A new theoretical framework jointly explains behavioral and neural variability across subjects performing flexible decision-making

    The ability to flexibly select and accumulate relevant information to form decisions, while ignoring irrelevant information, is a fundamental component of higher cognition. Yet its neural mechanisms remain unclear. Here we demonstrate that, under assumptions supported by both monkey and rat data, the space of possible network mechanisms to implement this ability is spanned by the combination of three different components, each with specific behavioral and anatomical implications. We further show that existing electrophysiological and modeling data are compatible with the full variety of possible combinations of these components, suggesting that different individuals could use different component combinations. To study variations across subjects, we developed a rat task requiring context-dependent evidence accumulation, and trained many subjects on it. Our task delivers sensory evidence through pulses that have random but precisely known timing, providing high statistical power to characterize each individual’s neural and behavioral responses. Consistent with theoretical predictions, neural and behavioral analysis revealed remarkable heterogeneity across rats, despite uniformly good task performance. The theory further predicts a specific link between behavioral and neural signatures, which was robustly supported in the data. Our results provide a new experimentally supported theoretical framework to analyze biological and artificial systems performing flexible decision-making tasks, and open the door to the study of individual variability in neural computations underlying higher cognition.
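    For intuition about the task structure, here is a minimal sketch of context-dependent accumulation of pulsed evidence using a perfect (leak-free, bound-free) accumulator; the pulse rates and the two evidence streams are hypothetical stand-ins for the actual task variables.

```python
import numpy as np

rng = np.random.default_rng(2)

def trial(context, T=1.0, dt=0.01, rate_hi=20.0, rate_lo=5.0):
    """One trial: pulses arrive at random but known times; only the cued
    evidence stream should drive the decision, the other must be ignored."""
    n = int(T / dt)
    # Signed pulse counts per time bin for two streams; stream A favors the
    # +1 choice, stream B favors the -1 choice.
    stream_a = rng.poisson(rate_hi * dt, n) - rng.poisson(rate_lo * dt, n)
    stream_b = rng.poisson(rate_lo * dt, n) - rng.poisson(rate_hi * dt, n)
    evidence = stream_a if context == "A" else stream_b
    accumulator = np.cumsum(evidence)     # flexible selection + accumulation
    return 1 if accumulator[-1] > 0 else -1

# In context "A" the correct answer is +1; in context "B" it is -1.
choices = np.array([trial("A") for _ in range(200)])
print((choices == 1).mean())   # accuracy well above chance
```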

    ReservoirPy: an Efficient and User-Friendly Library to Design Echo State Networks

    We present a simple, user-friendly library called ReservoirPy, based on Python scientific modules. It provides a flexible interface to implement efficient Reservoir Computing (RC) architectures with a particular focus on Echo State Networks (ESNs). Advanced features of ReservoirPy allow improvements in computation time of up to 87.9% on a simple laptop compared to a basic Python implementation. Overall, we provide tutorials for hyperparameter tuning, offline and online training, fast spectral initialization, and parallel and sparse matrix computation on various tasks (Mackey-Glass and audio recognition tasks). In particular, we provide graphical tools to easily explore hyperparameters using random search with the help of the hyperopt library.
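    As a usage illustration, the sketch below builds a small ESN for one-step-ahead prediction, assuming ReservoirPy's node-composition API (reservoirpy >= 0.3); the task and hyperparameter values are arbitrary.

```python
import numpy as np
from reservoirpy.nodes import Reservoir, Ridge

# Toy task: one-step-ahead prediction of a sine wave.
X = np.sin(np.linspace(0, 20 * np.pi, 2000)).reshape(-1, 1)
X_train, y_train = X[:-1], X[1:]

reservoir = Reservoir(units=300, sr=1.25, lr=0.3)  # spectral radius, leak rate
readout = Ridge(ridge=1e-6)                        # regularized linear readout

esn = reservoir >> readout        # compose the two nodes into an ESN model
esn.fit(X_train, y_train, warmup=100)
y_pred = esn.run(X_train)
print(np.mean((y_pred - y_train) ** 2))   # training mean squared error
```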

    Transferring Learning from External to Internal Weights in Echo-State Networks with Sparse Connectivity

    Modifying weights within a recurrent network to improve performance on a task has proven to be difficult. Echo-state networks in which modification is restricted to the weights of connections onto network outputs provide an easier alternative, but at the expense of modifying the typically sparse architecture of the network by including feedback from the output back into the network. We derive methods for using the values of the output weights from a trained echo-state network to set recurrent weights within the network. The result of this “transfer of learning” is a recurrent network that performs the task without requiring the output feedback present in the original network. We also discuss a hybrid version in which online learning is applied to both output and recurrent weights. Both approaches provide efficient ways of training recurrent networks to perform complex tasks. Through an analysis of the conditions required to make transfer of learning work, we define the concept of a “self-sensing” network state, and we compare and contrast this with compressed sensing.
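    The basic identity that makes such a transfer possible can be shown in a few lines: for a linear readout z = w_out·x fed back through weights w_fb, the feedback loop is equivalent to adding the rank-one term w_fb w_out^T to the recurrent matrix. The sketch below verifies this identity with random weights standing in for a trained network; the paper itself goes further and derives conditions for realizing the transfer while preserving sparse connectivity.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200
W = rng.normal(size=(N, N)) / np.sqrt(N)   # recurrent weights
w_fb = rng.normal(size=N)                  # output-to-network feedback weights
w_out = rng.normal(size=N) / np.sqrt(N)    # readout weights (stand-in for trained values)

# Fold the output feedback loop into the recurrent matrix:
#   tanh(W x + w_fb * (w_out . x)) == tanh((W + w_fb w_out^T) x)
W_transfer = W + np.outer(w_fb, w_out)

x_fb = rng.normal(size=N)    # network with explicit output feedback
x_tr = x_fb.copy()           # network after the transfer, no feedback needed

for _ in range(100):
    z = w_out @ x_fb
    x_fb = np.tanh(W @ x_fb + w_fb * z)
    x_tr = np.tanh(W_transfer @ x_tr)

print(np.max(np.abs(x_fb - x_tr)))   # ~0: the two trajectories coincide
```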

    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contributes in a fundamental manner to brain function. Progress has been slow in understanding and exploiting the computational power of recurrent dynamics for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work robustly in recurrent networks. Here we address both of these problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable neural patterns of activity that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
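    The paper's starting point, the chaotic sensitivity of a strongly coupled random recurrent network, can be reproduced in a few lines; the supervised recurrent learning step that stabilizes a chosen trajectory is omitted here, and the gain and network size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, g = 200, 1.5        # g > 1 puts the random network in the chaotic regime
dt, tau = 0.001, 0.01
W = g * rng.normal(size=(N, N)) / np.sqrt(N)

def run(x0, steps):
    """Integrate the rate network tau * x' = -x + W tanh(x)."""
    x = x0.copy()
    for _ in range(steps):
        x += dt / tau * (-x + W @ np.tanh(x))
    return x

x0 = rng.normal(size=N)
x_a = run(x0, 2000)
x_b = run(x0 + 1e-6 * rng.normal(size=N), 2000)  # tiny perturbation

# Chaos: the microscopically perturbed trajectory ends up macroscopically far
# away; the paper's supervised recurrent learning rule tunes W so that the
# trained trajectory becomes locally stable and robust to such perturbations.
print(np.linalg.norm(x_a - x_b))
```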